Reconstructing the Arches I: Constraining the Initial Conditions
We have performed a series of N-body simulations to model the Arches cluster.
Our aim is to find the best fitting model for the Arches cluster by comparing
our simulations with observational data and to constrain the parameters for the
initial conditions of the cluster. By neglecting the Galactic potential and
stellar evolution, we are able to efficiently search through a large parameter
space to determine e.g. the IMF, size, and mass of the cluster. We find that
the cluster's observed present-day mass function can be well explained with an
initial Salpeter IMF. The lower mass-limit of the IMF cannot be well
constrained from our models. In our best models, the total mass and the virial
radius of the cluster are initially (5.1 +/- 0.8) x 10^4 Msun and 0.76 +/- 0.12
pc, respectively. The concentration parameter of the initial King model is w0 =
3-5.
Comment: 12 pages, 14 figures, revised and accepted for publication in MNRAS
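A Salpeter IMF such as the one fitted above can be drawn by inverse-transform sampling of the power law dN/dm ∝ m^-2.35. The sketch below is purely illustrative; the mass limits and sample size are arbitrary choices, not the values constrained in the paper.

```python
import random

def sample_salpeter(n, m_lo=0.5, m_hi=120.0, alpha=2.35, seed=42):
    """Draw n stellar masses (Msun) from a Salpeter IMF, dN/dm ~ m^-alpha,
    by inverting the cumulative distribution of the truncated power law.
    The mass limits here are illustrative, not the paper's constrained values."""
    rng = random.Random(seed)
    g = 1.0 - alpha                      # exponent of the integrated power law
    a, b = m_lo ** g, m_hi ** g
    return [(a + rng.random() * (b - a)) ** (1.0 / g) for _ in range(n)]

masses = sample_salpeter(10000)          # low-mass stars dominate by number
```

With this slope, almost all of the sampled stars sit near the lower mass limit, which is why the low-mass end of the IMF is hard to constrain from the bright, observable members alone.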
Chemo-dynamical Evolution of the ISM in Galaxies
Chemo-dynamical models were introduced in the late 1980s and are a
generally accepted tool for understanding galaxy evolution. They have been
successfully applied to one-dimensional problems, e.g. the evolution of
non-rotating galaxies, and two-dimensional problems, e.g. the evolution of disk
galaxies. Recently, three-dimensional chemo-dynamical models have also become
available. In these models the dynamics of different components, i.e. dark
matter, stars and a multi-phase interstellar medium, are treated in a
self-consistent way and several processes allow for an exchange of matter,
energy and momentum between the components or different gas phases. Some
results of chemo-dynamical models and their comparison with observations of
chemical abundances or star formation histories will be reviewed.
Comment: 10 pages, 5 figures, to appear in "From Observations to
Self-Consistent Modelling of the ISM in Galaxies", 2003, eds M. Avillez et al.
A Hybrid N-Body Code Incorporating Algorithmic Regularization and Post-Newtonian Forces
We describe a novel N-body code designed for simulations of the central
regions of galaxies containing massive black holes. The code incorporates
Mikkola's 'algorithmic' chain regularization scheme including post-Newtonian
terms up to PN2.5 order. Stars moving beyond the chain are advanced using a
fourth-order integrator with forces computed on a GRAPE board. Performance
tests confirm that the hybrid code achieves better energy conservation, in less
elapsed time, than the standard scheme and that it reproduces the orbits of
stars tightly bound to the black hole with high precision. The hybrid code is
applied to two sample problems: the effect of finite-N gravitational
fluctuations on the orbits of the S-stars; and the inspiral of an intermediate-mass
black hole into the Galactic center.
Comment: 12 pages, 15 figures, accepted for publication in MNRAS
Direct N-body code on low-power embedded ARM GPUs
This work arises in the context of the ExaNeSt project, which aims at the design
and development of an exascale-ready supercomputer with a low energy consumption
profile that is nevertheless able to support the most demanding scientific and
technical applications. The ExaNeSt compute unit consists of densely packed low-power
64-bit ARM processors, embedded within Xilinx FPGA SoCs. SoC boards are
heterogeneous architectures in which computing power is supplied both by CPUs and
GPUs, and they are emerging as a possible low-power and low-cost alternative to
clusters based on traditional CPUs. A state-of-the-art direct N-body code
suitable for astrophysical simulations has been re-engineered in order to
exploit SoC heterogeneous platforms based on ARM CPUs and embedded GPUs.
Performance tests show that embedded GPUs can be effectively used to accelerate
real-life scientific calculations, and that they are also promising because of
their energy efficiency, a crucial design constraint for future exascale platforms.
Comment: 16 pages, 7 figures, 1 table, accepted for publication in the
Computing Conference 2019 proceedings
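The computational core of a direct N-body code is the all-pairs force evaluation, the O(N^2) kernel that such codes offload to the accelerator. A minimal Python sketch of that kernel (illustrative only, not the re-engineered GPU code of the paper):

```python
import math

def accelerations(pos, mass, eps=1e-3):
    """All-pairs Newtonian accelerations (G = 1); eps is a Plummer softening
    length that avoids the singularity in close encounters."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2 + eps ** 2
            f = mass[j] / (r2 * math.sqrt(r2))
            for k in range(3):
                acc[i][k] += f * dx[k]
    return acc

# two unit masses one unit apart attract each other symmetrically
a = accelerations([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [1.0, 1.0], eps=0.0)
```

On a GPU, each particle i typically maps to one thread, since the inner loop over j is independent between particles; that embarrassingly parallel structure is what makes even embedded GPUs effective here.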
Dynamics in Young Star Clusters: From Planets to Massive Stars
The young star clusters we observe today are the building blocks of a new
generation of stars and planets in our Galaxy and beyond. Despite their
fundamental role, we still lack knowledge about the conditions under which star
clusters form and the impact of these often harsh environments on the evolution
of their stellar and substellar members. We demonstrate the vital role
numerical simulations play in uncovering both key issues. Using dynamical models
of different star cluster environments we show the variety of effects stellar
interactions potentially have. Moreover, our significantly improved measure of
mass segregation reveals that it can occur rapidly even for star clusters
without substructure. This finding is a critical step to resolve the
controversial debate on mass segregation in young star clusters and provides
strong constraints on their initial conditions.
Comment: 8 pages, 4 figures; to appear in the proceedings of "Stellar Clusters
and Associations - A RIA workshop on Gaia", 23-27 May 2011, Granada, Spain
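Measures of mass segregation based on the minimum spanning tree (MST), in the spirit of the Λ_MSR of Allison et al. (2009), compare the MST of the most massive stars with that of random reference samples; the improved measure of the abstract may differ in detail. A sketch of the basic idea:

```python
import math, random

def mst_length(points):
    """Total edge length of the Euclidean minimum spanning tree (Prim's algorithm)."""
    n = len(points)
    in_tree = [False] * n
    dist = [math.inf] * n
    dist[0] = 0.0
    total = 0.0
    for _ in range(n):
        # grab the cheapest particle not yet connected to the tree
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: dist[i])
        in_tree[u] = True
        total += dist[u]
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < dist[v]:
                    dist[v] = d
    return total

def mass_segregation_ratio(pos, mass, k=10, n_ref=50, seed=1):
    """Lambda-style diagnostic: mean MST length of random k-samples divided by
    the MST length of the k most massive stars; values > 1 indicate segregation."""
    rng = random.Random(seed)
    heavy = [p for _, p in sorted(zip(mass, pos), key=lambda t: -t[0])[:k]]
    l_ref = sum(mst_length(rng.sample(pos, k)) for _ in range(n_ref)) / n_ref
    return l_ref / mst_length(heavy)
```

Because the diagnostic is purely geometric, it needs no choice of cluster centre, which is one reason MST-based measures are robust for substructured clusters.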
How well do STARLAB and NBODY compare? II: Hardware and accuracy
Most recent progress in understanding the dynamical evolution of star
clusters relies on direct N-body simulations. Owing to the computational
demands, and the desire to model more complex and more massive star clusters,
hardware accelerators, such as GRAPE special-purpose hardware or,
more recently, GPUs (i.e. graphics cards), are generally utilised. In addition,
simulations can be accelerated by adjusting parameters determining the
calculation accuracy (i.e. changing the internal simulation time step used for
each star).
We extend our previous thorough comparison (Anders et al. 2009) of basic
quantities as derived from simulations performed either with STARLAB/KIRA or
NBODY6. Here we focus on differences arising from using different hardware
accelerations (including the increasingly popular graphic card
accelerations/GPUs) and different calculation accuracy settings.
We use the large number of star cluster models (for a fixed stellar mass
function, without stellar/binary evolution, primordial binaries, external tidal
fields, etc.) already used in the previous paper, evolve them with STARLAB/KIRA
(and NBODY6, where required), analyse them in a consistent way and compare the
averaged results quantitatively. For this quantitative comparison, we apply the
bootstrap algorithm for functional dependencies developed in our previous
study.
In general we find very high comparability of the simulation results,
independent of the computer hardware used (including the hardware accelerators)
and the N-body code used. For the tested accuracy settings we find that for
reduced accuracy (i.e. time step at least a factor 2.5 larger than the standard
setting) most simulation results deviate significantly from the results using
standard settings. The remaining deviations are comprehensible and explicable.
Comment: 14 pages incl. 3 pages with figures and 4 pages of tables (analysis
results), MNRAS in press
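The bootstrap underlying such a quantitative comparison can be illustrated in its simplest form: resample the data with replacement and read confidence limits off the empirical distribution of the recomputed statistic. This sketches only the basic idea, not the functional-dependence variant developed in the previous paper, and the data values are invented for illustration:

```python
import random

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval: resample the data with
    replacement, recompute the statistic, and take empirical quantiles."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(sample) for _ in sample]) for _ in range(n_boot)
    )
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

# hypothetical per-model measurements, e.g. a core radius at a fixed time
data = [1.0, 1.2, 0.9, 1.1, 1.05, 0.95, 1.15, 1.0]
lo, hi = bootstrap_ci(data, lambda xs: sum(xs) / len(xs))
```

Two simulation setups can then be called consistent when their bootstrap intervals for the same derived quantity overlap.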
A pilgrimage to gravity on GPUs
In this short review we present the developments over the last 5 decades that
have led to the use of Graphics Processing Units (GPUs) for astrophysical
simulations. Since the introduction of NVIDIA's Compute Unified Device
Architecture (CUDA) in 2007 the GPU has become a valuable tool for N-body
simulations and is so popular these days that almost all papers about high
precision N-body simulations use methods that are accelerated by GPUs. With the
GPU hardware becoming more advanced and being used for more advanced algorithms,
such as gravitational tree-codes, we see a bright future for GPU-like hardware in
computational astrophysics.
Comment: To appear in: European Physical Journal "Special Topics": "Computer
Simulations on Graphics Processing Units". 18 pages, 8 figures
MYRIAD: A new N-body code for simulations of Star Clusters
We present a new C++ code for collisional N-body simulations of star
clusters. The code uses the Hermite fourth-order scheme with block time steps
for advancing the particles in time, while the forces and neighboring particles
are computed using the GRAPE-6 board. Special treatment is used for close
encounters, binary and multiple sub-systems that either form dynamically or
exist in the initial configuration. The structure of the code is modular and
allows the appropriate treatment of additional physical phenomena, such as stellar
and binary evolution, stellar collisions and evolution of close black-hole
binaries. Moreover, it can be easily modified so that the part of the code that
uses GRAPE-6 can be replaced by another module that uses other accelerating
hardware, such as Graphics Processing Units (GPUs). Appropriate
choice of the free parameters gives good accuracy and speed for simulations of
star clusters up to and beyond core collapse. Simulations of Plummer models
consisting of equal-mass stars reached core collapse at t~17 half-mass
relaxation times, which compares very well with existing results, while the
cumulative relative error in the energy remained below 0.001. Also, comparisons
with published results of other codes for the time of core collapse for
different initial conditions show excellent agreement. Simulations of King
models with an initial mass-function, similar to those found in the literature,
reached core collapse at t~0.17, which is slightly smaller than the expected
result from previous works. Finally, the code accuracy becomes comparable to, and
even better than, the accuracy of existing codes when a number of close binary
systems are dynamically created in a simulation. This is due to the high
accuracy of the method used for close binary and multiple sub-systems.
Comment: 24 pages, 29 figures, accepted for publication in Astronomy &
Astrophysics
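The fourth-order Hermite scheme used by MYRIAD predicts each particle with a Taylor series built from accelerations and their time derivatives (jerks), then corrects with the forces recomputed at the predicted state. A shared-timestep sketch of one step (MYRIAD itself uses block time steps and evaluates the force loop on GRAPE-6):

```python
import math

def acc_jerk(pos, vel, mass):
    """Accelerations and jerks (time derivatives of the acceleration) for all
    particles (G = 1), the two force quantities the Hermite scheme needs."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    jrk = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            dv = [vel[j][k] - vel[i][k] for k in range(3)]
            r2 = sum(x * x for x in dx)
            r3 = r2 * math.sqrt(r2)
            rv = sum(x * v for x, v in zip(dx, dv)) / r2
            for k in range(3):
                acc[i][k] += mass[j] * dx[k] / r3
                jrk[i][k] += mass[j] * (dv[k] - 3.0 * rv * dx[k]) / r3
    return acc, jrk

def hermite_step(pos, vel, mass, dt):
    """One shared-timestep fourth-order Hermite predictor-corrector step."""
    a0, j0 = acc_jerk(pos, vel, mass)
    # predictor: Taylor expansion to third order in dt
    pp = [[p[k] + v[k] * dt + a[k] * dt ** 2 / 2 + j[k] * dt ** 3 / 6
           for k in range(3)] for p, v, a, j in zip(pos, vel, a0, j0)]
    pv = [[v[k] + a[k] * dt + j[k] * dt ** 2 / 2 for k in range(3)]
          for v, a, j in zip(vel, a0, j0)]
    a1, j1 = acc_jerk(pp, pv, mass)
    # corrector: Hermite interpolation using force data at both ends of the step
    new_vel = [[v[k] + (a0i[k] + a1i[k]) * dt / 2
                + (j0i[k] - j1i[k]) * dt ** 2 / 12 for k in range(3)]
               for v, a0i, a1i, j0i, j1i in zip(vel, a0, a1, j0, j1)]
    new_pos = [[p[k] + (v[k] + nv[k]) * dt / 2
                + (a0i[k] - a1i[k]) * dt ** 2 / 12 for k in range(3)]
               for p, v, nv, a0i, a1i in zip(pos, vel, new_vel, a0, a1)]
    return new_pos, new_vel

# circular equal-mass binary (G = 1): total energy should stay at -0.125
pos = [[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]]
vel = [[0.0, 0.5, 0.0], [0.0, -0.5, 0.0]]
mass = [0.5, 0.5]
for _ in range(100):
    pos, vel = hermite_step(pos, vel, mass, 0.01)
```

Block time steps, as in MYRIAD, extend this by giving each particle its own power-of-two step so that only the particles due for an update are advanced and corrected at any given time.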